label model

Mitigating Source Bias for Fairer Weak Supervision

Neural Information Processing Systems

Theoretically, we show that it is possible for our approach to simultaneously improve both accuracy and fairness, in contrast to standard fairness approaches that suffer from tradeoffs. Empirically, we show that our technique improves accuracy over weak supervision baselines by as much as 32% while reducing the demographic parity gap by 82.5%.
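The demographic parity gap cited above is a standard group-fairness metric: the absolute difference in positive-prediction rates between two demographic groups. A minimal sketch of its computation (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1); group: binary group membership (0/1).
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # P(y_hat = 1 | group = 0)
    rate_b = y_pred[group == 1].mean()  # P(y_hat = 1 | group = 1)
    return abs(rate_a - rate_b)

# Example: a classifier that predicts positives more often for group 0
preds = [1, 1, 1, 0, 0, 1, 0, 0]
grp   = [0, 0, 0, 0, 1, 1, 1, 1]
gap = demographic_parity_gap(preds, grp)  # |0.75 - 0.25| = 0.5
```

A gap of 0 means both groups receive positive predictions at the same rate; the paper's 82.5% reduction refers to shrinking this quantity.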



DP-SSL: Towards Robust Semi-supervised Learning with A Few Labeled Samples

Neural Information Processing Systems

However, when the size of labeled data is very small (say a few labeled samples per class), SSL performs poorly and unstably, possibly due to the low quality of learned pseudo labels. In this paper, we propose a new SSL method called DP-SSL that adopts an innovative data programming (DP) scheme to generate probabilistic labels for unlabeled data. Different from existing DP methods that rely on human experts to provide initial labeling functions (LFs), we develop a multiple-choice learning (MCL) based approach to automatically generate LFs from scratch in SSL style. With the noisy labels produced by the LFs, we design a label model to resolve the conflict and overlap among the noisy labels, and finally infer probabilistic labels for unlabeled samples.
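The label-model step described above takes conflicting, overlapping LF votes and turns them into one probability distribution per sample. The sketch below is not the paper's MCL-based label model; it is a minimal vote-normalizing label model in the data-programming spirit, with all names illustrative:

```python
import numpy as np

def probabilistic_labels(lf_votes, n_classes, abstain=-1):
    """Infer probabilistic labels from noisy labeling-function votes.

    lf_votes: (n_samples, n_lfs) array where each entry is a class index
    or `abstain`. This toy model treats every LF as equally accurate and
    normalizes per-sample vote counts; real label models additionally
    estimate LF accuracies and correlations.
    """
    n = lf_votes.shape[0]
    counts = np.zeros((n, n_classes))
    for c in range(n_classes):
        counts[:, c] = (lf_votes == c).sum(axis=1)
    totals = counts.sum(axis=1, keepdims=True)
    safe_totals = np.where(totals == 0, 1, totals)  # avoid divide-by-zero
    probs = counts / safe_totals
    probs[totals[:, 0] == 0] = 1.0 / n_classes  # all-abstain rows: uniform
    return probs

votes = np.array([
    [0, 0, 1],     # two LFs vote class 0, one votes class 1
    [1, -1, 1],    # one LF abstains
    [-1, -1, -1],  # all abstain -> fall back to uniform
])
p = probabilistic_labels(votes, n_classes=2)
```

Here `p[0]` is `[2/3, 1/3]`, resolving the conflict by vote share, and the all-abstain row gets a uniform distribution rather than an arbitrary hard label.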





Understanding Programmatic Weak Supervision via Source-aware Influence Function

Neural Information Processing Systems

To achieve this, we build on the Influence Function (IF) and propose source-aware IF, which leverages the generation process of the probabilistic labels to decompose the end model's training objective and then calculate the influence associated with each (data, source, class) tuple.
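The decomposition idea can be illustrated with a heavily simplified sketch. Assumptions not from the paper: probabilistic labels are a normalized vote average, so a source that votes class c on sample i contributes weight 1/(#voting sources) to p[i, c]; and influence is approximated by a first-order gradient inner product, omitting the inverse-Hessian factor of the exact influence function. The model is plain logistic regression:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_logloss(w, x, y):
    """Gradient of binary cross-entropy for logistic regression at one point."""
    return (sigmoid(x @ w) - y) * x

def source_aware_influence(w, X, lf_votes, x_test, y_test, abstain=-1):
    """First-order influence of each (data, source, class) tuple on a test loss.

    Returns an (n_samples, n_sources, 2) array: entry [i, s, c] is the
    approximate influence of source s voting class c on sample i.
    Gradient inner products stand in for the full inverse-Hessian
    influence computation.
    """
    g_test = grad_logloss(w, x_test, y_test)
    n, m = lf_votes.shape
    infl = np.zeros((n, m, 2))
    for i in range(n):
        voted = lf_votes[i] != abstain
        if not voted.any():
            continue
        share = 1.0 / voted.sum()  # this source's weight in p[i, c]
        for s in np.where(voted)[0]:
            c = lf_votes[i, s]
            g = grad_logloss(w, X[i], float(c))
            infl[i, s, c] = -share * (g @ g_test)
    return infl

# Tiny illustration: two samples, two sources, untrained weights
X = np.array([[1.0, 0.0], [0.0, 1.0]])
votes = np.array([[1, -1], [0, 1]])
w = np.zeros(2)
infl = source_aware_influence(w, X, votes, np.array([1.0, 1.0]), 1.0)
```

Under the usual sign convention, a positive entry means removing that tuple's contribution would increase the test loss, i.e., the tuple is helpful; entries for abstaining sources are identically zero.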